Tags: hugging face*


  1. This page showcases the diverse collection of machine learning models and datasets provided by Arcee AI on Hugging Face. The collections include the advanced Trinity family of models, such as Trinity-Large-Thinking and Trinity-Mini, designed for various text generation tasks. Additionally, the repository features specialized datasets like Teacher Logits for distillation, the AFM 4.5B series, and various quantized flagship models like Virtuoso and SuperNova. These resources cater to researchers and developers looking for high-performance, specialized AI models ranging from small-scale nano versions to massive 399B parameter models, supporting tasks like feature extraction and text generation.
  2. Announcement that ggml.ai is joining Hugging Face to ensure the long-term sustainability and progress of the ggml/llama.cpp community and Local AI. Highlights continued open-source development, improved user experience, and integration with the Hugging Face transformers library.
  3. The open-source AI landscape is rapidly evolving, and recent developments surrounding GGML and Llama.cpp are significant for those interested in running large language models locally. GGML, a C library for machine learning, has joined Hugging Face, ensuring its continued development and accessibility. Meanwhile, Llama.cpp, a project focused on running Llama models on CPUs, remains open-source and is finding a stable home. This article details these changes, the implications for local AI enthusiasts, and the benefits of an open ecosystem.
  4. OpenAI releases gpt-oss-safeguard, an open-source AI model for content moderation that allows developers to define their own safety policies instead of relying on pre-trained models. It operates by reasoning about content based on custom policies, offering a more flexible and nuanced approach to moderation.
  5. This page details the command-line utility for the Embedding Atlas, a tool for exploring large text datasets with metadata. It covers installation, data loading (local and Hugging Face), visualization of embeddings using SentenceTransformers and UMAP, and usage instructions with available options.
  6. This article details five major updates to Gradio MCP servers, including seamless local file support, real-time progress notifications, OpenAPI spec to MCP conversion, improvements to authentication, and modifying tool descriptions. It highlights how these improvements enhance the development and hosting of AI-powered MCP servers on Hugging Face Spaces.
    2025-07-20, by klotz
  7. Leveraging MCP to automate your daily routine. This article explores the Model Context Protocol (MCP) and demonstrates how to build a toolkit for analysts using it, including creating a local MCP server with useful tools and integrating it with AI tools like Claude Desktop.
  8. This course provides an introduction to the Model Context Protocol (MCP), covering its theory, design, and practical application. It includes foundational units, hands-on exercises, use case assignments, and collaboration opportunities. The course aims to equip students with the knowledge and skills to build AI applications leveraging external data and tools using MCP standards.
    2025-05-17, by klotz
  9. A library for working with prompt templates locally or on the Hugging Face Hub. It aims to provide a standardized way of sharing and using prompt templates, with a focus on interoperability and modularity.
  10. This article details the creation of a simple, 50-line agent using Model Context Protocol (MCP) and Hugging Face's tools, demonstrating how easily agents can be built with modern LLMs that support function/tool calling.

    1. **MCP Overview**: MCP is a standard API for exposing tools that can be integrated with Large Language Models (LLMs).
    2. **Implementation**: The author explains how to implement an MCP client using TypeScript and the Hugging Face Inference Client. This client connects to MCP servers, retrieves tools, and integrates them into LLM inference.
    3. **Tools**: Tools are defined with a name, description, and parameters, and are passed to the LLM for function calling.
    4. **Agent Design**: An agent is essentially a while loop that alternates between tool calling and feeding tool results back into the LLM until a specific condition is met, such as two consecutive non-tool messages.
    5. **Code Example**: The article provides a concise 50-line TypeScript implementation of an agent, demonstrating the simplicity and power of MCP.
    6. **Future Directions**: The author suggests experimenting with different models and inference providers, as well as integrating local LLMs using frameworks like llama.cpp or LM Studio.
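
    The agent loop described in points 3–4 can be sketched as follows. This is a minimal illustration, not the article's actual implementation: the LLM and the tool registry are stubbed out (a real client would use the Hugging Face Inference Client and tools fetched from an MCP server), and the names `callLLM`, `runAgent`, and `get_time` are hypothetical.

    ```typescript
    // Messages exchanged in the agent loop.
    type Message = { role: "assistant" | "tool"; content: string; toolCall?: string };

    // Hypothetical tool registry, standing in for tools retrieved from an MCP server.
    const tools: Record<string, (arg: string) => string> = {
      get_time: () => new Date().toISOString(),
    };

    // Stub LLM: requests a tool call on the first turn, then answers plainly.
    // A real implementation would call an inference endpoint here.
    function callLLM(history: Message[]): Message {
      const toolTurns = history.filter((m) => m.role === "tool").length;
      if (toolTurns === 0) {
        return { role: "assistant", content: "", toolCall: "get_time" };
      }
      return { role: "assistant", content: "Done." };
    }

    // The agent is essentially a while loop: alternate LLM turns and tool
    // calls until the model emits two consecutive non-tool messages.
    function runAgent(): Message[] {
      const history: Message[] = [];
      let consecutiveNonTool = 0;
      while (consecutiveNonTool < 2) {
        const msg = callLLM(history);
        history.push(msg);
        if (msg.toolCall && tools[msg.toolCall]) {
          history.push({ role: "tool", content: tools[msg.toolCall]("") });
          consecutiveNonTool = 0; // tool result feeds back into the next LLM turn
        } else {
          consecutiveNonTool += 1;
        }
      }
      return history;
    }
    ```

    With the stub above, the loop runs one tool round-trip and then terminates on the stop condition, yielding a four-message history.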


SemanticScuttle - klotz.me: tagged with "hugging face"

Propulsed by SemanticScuttle